A spectre is haunting the world: the spectre of Artificial Intelligence. Hardly a day passes without some celebration of the potential of AI to usher in a new utopia or a clarion call for action on the part of governments, corporations and other stakeholders to mitigate the serious risks it poses to humanity.
While larger philosophical debates continue about the appropriateness of the term AI itself, the nature of intelligence, the existential dangers of AI and the feasibility of attaining the holy grail of an Artificial General Intelligence that surpasses human cognitive abilities, AI is already a reality. It is transforming our lives in routine and banal ways as well as in ways we perhaps cannot quite fathom.
What does AI mean for the nation-state? Arguably, AI's most significant impact will be to massively expand the surveillance capabilities of the modern nation-state, a process already underway in a number of societies. The French philosopher and historian Michel Foucault's seminal work on the relationship between power and knowledge has shown how the logic of surveillance is central to the emergence of the Western nation-state and all the institutions of modern life, from the prison to the hospital and the school to the office.
Well before this new high noon of AI, we had already been inhabiting an age of Big Data, in which every dimension of our lives is tracked, recorded, sliced and diced, combined, commoditised and monetised.
In her 2014 book Dragnet Nation, Julia Angwin described this situation in the US as one in which vast amounts of information were routinely and indiscriminately gathered by both the state and private actors, with serious implications for the privacy and freedom of citizens.
Angwin dates the origins of the dragnet nation to the US security state that took shape in the aftermath of the terrorist attacks of September 11, 2001. The dragnet nation was forged through two distinct imperatives: the state's goal of collecting information on its inhabitants for the purposes of security and private corporations' goal of making profits.
Data collected ostensibly for one purpose was combined with data collected for the other objective, packaged in multiple ways, and sold back by the private sector to the government or to other corporate entities.
With AI, the scale, power, and pace of surveillance and data gathering will expand enormously, intensifying existing concerns about its impact on democracy across the globe. While China is often cited as an example of the possible AI-powered dystopia that awaits us all, no society is necessarily immune to these risks.
For India, AI is said to hold remarkable possibilities of all sorts, from simplifying onerous bureaucratic processes to radically democratising access to education. Perhaps, and hopefully, at least some of this will materialise. A serious concern though is that AI development in India, undergirded by a public-private partnership, will exacerbate discrimination and violence – physical, structural or symbolic – against its most vulnerable groups.
Studies, mostly in the American context, show that technologies such as facial recognition or predictive policing often reflect algorithmic bias against particular groups such as African-Americans. With the ascendancy of Hindutva under Narendra Modi since 2014, religious and caste prejudice against minority groups is already thoroughly normalised and, in fact, squarely consistent with the ideology of the ruling party.
Over the last decade in India, democratic institutions, constitutional safeguards, and rights have been severely undermined by the actions and policies of the Modi government.
What happens when, in such a situation, artificially intelligent technologies and applications massively ramp up the potential for religious and caste bias and, consequently, for abuse? What level of detail do we currently have about the datasets on which AI technologies meant for state and private use in India are trained? What kinds of assumptions might be embedded and encoded within them? Will a ruling dispensation notorious for opacity, thin-skinned to the point of paranoia, and known for targeting critics and dissenters even allow any conversation or examination of these matters?
Similar worries about Aadhaar, India's ambitious biometric identification project, have not necessarily led to robust safeguards, and many of those concerns remain unaddressed.
The questions bear a universal urgency, but a few factors make them more acute in the Indian context. In the US and Western Europe, for instance, state actors, as distinct from the government of the day, enjoy a relatively greater degree of autonomy and protection from political pressures, even if they may not be entirely immune to them.
With a more robust and well-funded segment of civil society organisations focused on these issues, even if their resources pale in comparison with those of technology firms, there is at least something of a national conversation on the subject in these societies.
India is not unique in having a large number of bad actors misusing generative AI to generate images of dubious provenance and spread fake news about particular groups or individuals. The pronouncements of Donald Trump over the years, and now of Elon Musk, chief spreader of lies on X, the platform he owns, match those of Hindutva trolls in their vitriol and impact on minorities.
The main point of contrast between India and Western democracies is that the Indian state under the Modi government has made weaponising misinformation practically an instrument of governance, something it executes through the Bharatiya Janata Party’s “IT cell”, sycophantic media organisations and journalists, and proxies such as online Hindutva groups.
At least one analysis suggests that in the 2024 Indian general election, the positives of AI seemed to outweigh the negatives. We do not yet know at a granular level what the implications of AI will be for modern warfare and the national security capabilities of the state. In the US, newer companies like Palantir and Anduril, as well as established behemoths like Microsoft, are deep in the business of developing AI for military and national security purposes.
Neo-Luddite alarmism about AI serves no purpose; it would be as futile as King Canute commanding the tide to halt. The main question that must animate our discussions of AI in India is whether, in the trade-offs involved in incorporating AI into statecraft and social practice, some groups disproportionately bear the costs while others largely reap the benefits.
Even as AI development strengthens the power of the Indian state and of corporations that work with the state, will it exacerbate the vulnerability and fragility of those who might already be amongst the weakest of us?
The Modi government, like all Indian governments before it, has shown a remarkable continuity in following the ideology of the developmental state forged under Nehru. Even if the emphasis on development through centralised planning has been replaced by a different, market-friendly vocabulary and ideological frame, the obsessions with the idea of progress and the ambitions to reengineer Indian society through top-down measures such as demonetisation clearly persist.
It is likely that the policy discussion and rhetoric on AI will be articulated within the same ideological framework. India would do well not to repeat the policy mistakes of its past obsessions with development and planning, with their histories of violence and exploitation and legacies of unequal benefits and costs.
Rohit Chopra is Professor of Communication at Santa Clara University